Oracle Performance Tuning and Optimization
(Publisher: Macmillan Computer Publishing)
Author(s): Edward Whalen
ISBN: 067230886x
Publication Date: 04/01/96



Asynchronous I/O

Although the terms asynchronous I/O and synchronous I/O may sound like opposites, they operate at completely different levels. In fact, the asynchronous I/Os used by Oracle must be synchronous.

With asynchronous I/O, the process requesting the I/O submits the I/O request and moves on. The OS response returns later. This arrangement allows a process (such as the DBWR process) to handle many outstanding I/Os at once without having to wait for each one to return. By using asynchronous I/Os, the DBWR can continue processing, submitting I/Os and retrieving the results of the I/Os later. With asynchronous I/O, the process submitting the requests is responsible for keeping track of all the I/Os it has outstanding.


NOTE:  When I say that “asynchronous I/Os are in fact synchronous,” what I mean is that the OS does not tell the process that the I/O has completed until the I/O has actually been written to the physical medium. Therefore, the asynchronous I/O uses synchronous writes.
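The DBWR pattern described above—submit many I/Os, keep working, and collect completions later—can be sketched conceptually in Python. This is an illustration only, not Oracle's implementation: a thread pool stands in for the OS asynchronous I/O facility, the file name and block sizes are hypothetical, and `os.fsync` mirrors the note that completion is reported only after the data reaches the physical medium.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor, as_completed

def write_block(path, offset, data):
    """Write one block and force it to the physical medium (fsync),
    mirroring the note that each completed I/O is a synchronous write."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    try:
        os.pwrite(fd, data, offset)
        os.fsync(fd)            # completion not reported until durable
        return offset, len(data)
    finally:
        os.close(fd)

# The "DBWR" submits many I/Os at once, keeps track of the outstanding
# requests itself, and retrieves the results as they complete.
path = os.path.join(tempfile.mkdtemp(), "datafile.dbf")   # hypothetical
blocks = {off: bytes([off % 256]) * 512 for off in range(0, 4096, 512)}

with ThreadPoolExecutor(max_workers=4) as pool:
    outstanding = {pool.submit(write_block, path, off, data): off
                   for off, data in blocks.items()}
    for fut in as_completed(outstanding):   # completions arrive in any order
        off, nbytes = fut.result()
        print(f"block at offset {off} written ({nbytes} bytes)")

print("file size:", os.path.getsize(path))  # 4096
```

The key point is the `outstanding` dictionary: as the text says, the submitting process—not the OS—is responsible for tracking every I/O it has in flight.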

Miscellaneous

Each OS has its own unique features and operational methods. Over the years, many new features have been used to improve the performance of Oracle in different ways on different OSes. Some of these features and improvements have similarities; others are unique to the specific operating system.

Post-Wait Semaphore

With some varieties of the UNIX operating system, it was found that during normal processing, at the beginning of a clock tick, various processes would try to acquire a particular resource; after trying unsuccessfully for a while, they would go to sleep. Eventually, every process would go to sleep except the one holding the resource. When that process finished (or submitted an I/O and had nothing else to do), it too would go to sleep. Because sleeping processes wake up only on the clock tick, the last portion of the clock cycle passed with no processes running.

Oracle and some of the UNIX vendors got together to solve this problem by inventing a device called a post-wait semaphore. When this new type of semaphore is available, it replaces the use of standard semaphores. By using this semaphore, Oracle has more control over the UNIX sleep and wakeup routines: a process that is freeing a resource can wake up any processes that are waiting (sleeping) for that resource.
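The essential behavior—a waiter sleeps until the holder posts, instead of waking on every clock tick to re-check—can be sketched with a condition variable. This is a conceptual model only, not the kernel mechanism the vendors actually built; the class name and structure are invented for illustration.

```python
import threading

class PostWaitSemaphore:
    """Conceptual sketch: the process freeing the resource wakes a
    sleeping waiter directly, with no clock-tick polling."""
    def __init__(self):
        self._cond = threading.Condition()
        self._free = False

    def wait(self):
        with self._cond:
            while not self._free:        # sleep until posted
                self._cond.wait()
            self._free = False           # acquire the resource

    def post(self):
        with self._cond:
            self._free = True
            self._cond.notify()          # wake a waiter immediately

sem = PostWaitSemaphore()
results = []

def worker():
    sem.wait()                           # sleeps; no periodic re-check
    results.append("woken by post")

t = threading.Thread(target=worker)
t.start()
sem.post()    # the freeing process wakes the sleeping waiter directly
t.join()
print(results[0])
```

Contrast this with the clock-tick scheme described above, where a sleeping process could not run again until the next tick even if the resource had already been freed.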

Scheduling and Preemption

Another area you may be able to influence is tuning the OS scheduler and preemption system to better match the needs of Oracle. Some OSes are designed for the general case and must be tuned to provide the highest level of performance for more specific cases. This is true with how the typical OS does scheduling and preemption.

Here is how the standard timesharing model of scheduling works: A process is scheduled based on its priority. The process’s priority may degrade over time based on how much CPU time it has already had or other factors. When a process is scheduled, it runs until some criterion is met. Here are some typical events that cause a process to relinquish the CPU:

  The process has finished all it needs to do.
  The process executes an I/O instruction that causes the process to go to sleep.
  The process has run for the maximum time slice interval.
  An interrupt occurs, which usually causes the process to be temporarily halted while the interrupt is serviced.
  The process is preempted by another process.

The final situation—when the process is preempted by another process—is most interesting to the database performance engineer. An OS designed for general processing makes some assumptions that are not necessarily true for all cases. One of these assumptions is that a lower-priority process should always be preempted by a higher-priority process.

Although this assumption may hold for long-running processes that never relinquish the CPU, an Oracle process rarely runs for long before giving up the CPU voluntarily because of an I/O request.

When the process is preempted by a higher-priority process, the preempted process would most likely have relinquished the CPU very soon anyway. The result is an unnecessary preemption that wastes CPU and I/O resources.

To limit the waste of CPU and I/O resources caused by unnecessary preemption, several OS vendors have included tunable parameters that allow you to control preemption more closely. In many cases, you can turn off preemption and achieve greater performance by reducing process switches.
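The cost argument can be made concrete with a toy model. The numbers here are illustrative assumptions, not measurements: each short-running, I/O-bound process either yields once voluntarily, or is preempted mid-burst and later resumed, paying an extra context switch.

```python
# Toy model (illustrative numbers only, not measured data).
SWITCH_COST = 5   # hypothetical cost units per context switch

def total_switch_cost(n_procs, preemptive):
    """Each process runs a short CPU burst, then blocks on I/O.
    With preemption, a higher-priority arrival forces an extra
    switch even though the process would have yielded anyway."""
    switches = 0
    for _ in range(n_procs):
        if preemptive:
            switches += 2    # preempted mid-burst, then resumed
        else:
            switches += 1    # one switch at the voluntary I/O yield
    return switches * SWITCH_COST

print("preemptive cost:    ", total_switch_cost(100, True))   # 1000
print("non-preemptive cost:", total_switch_cost(100, False))  # 500
```

Under these assumptions the preempted workload pays double the switching cost for no scheduling benefit, which is exactly the waste the tunable preemption parameters are meant to eliminate.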

Cache Affinity

Cache affinity is another area that has received a lot of attention over the last few years. Cache affinity is a term used in multiprocessor OSes and refers to the act of trying to run a process on the same processor it last ran on. The theory behind cache affinity is that if any data from that process is left in the CPU cache, running the process on the same processor lets it reuse that data rather than reloading it from memory. For general-purpose computing, cache affinity may be a good thing.

However, when you are running a multiprocessor system with many users and heavy access patterns (such as a large OLTP system), it is extremely unlikely that any data is left in the CPU cache when it is time for your process to run again. What cache affinity does for a large multiprocessor system like this is to waste many CPU cycles in the OS scheduler trying to relocate a process for no good reason.

On the other hand, if you run a large application with a small number of processes (such as a DSS system), you may benefit from cache affinity. You will also see a benefit from cache affinity if you have a fair number of processes all running the same shared code.
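On systems that expose affinity control, you can inspect and set it directly. The sketch below assumes a Linux-style interface, where Python’s `os.sched_getaffinity` and `os.sched_setaffinity` are available; pinning to a single CPU is the hard form of affinity that suits the small-process-count DSS case described above.

```python
import os

# Linux-specific: inspect the set of CPUs this process may run on.
cpus = os.sched_getaffinity(0)
print("eligible CPUs:", sorted(cpus))

# Pin the process to one CPU -- a hard form of cache affinity,
# appropriate for a few long-running processes (the DSS case),
# but counterproductive for a large OLTP system.
first = min(cpus)
os.sched_setaffinity(0, {first})
print("pinned to:", sorted(os.sched_getaffinity(0)))

# Restore the original mask so the scheduler is free again.
os.sched_setaffinity(0, cpus)
```

On a busy OLTP system, per the text, leaving the scheduler free (the full mask) is usually the better choice, because the cached data is long gone by the time the process runs again.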

Depending on your application and workload, you may find that many innovative OS features are either a benefit or extra overhead. By tuning only those features that are beneficial, and turning off those features that are just extra overhead, you may see some performance gains.
